The Dantzig selector and sparsity oracle inequalities
Author
Abstract
and λ̂ := λ̂ε ∈ Argmin_{λ∈Λ̂ε} ‖λ‖ℓ1. In the case where f∗ := fλ∗, λ∗ ∈ ℝ^N, Candès and Tao [Ann. Statist. 35 (2007) 2313–2351] suggested using λ̂ as an estimator of λ∗. They called this estimator "the Dantzig selector". We study the properties of fλ̂ as an estimator of f∗ for regression models with random design, extending some of the results of Candès and Tao (and providing alternative proofs of these results).
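The constrained ℓ1-minimization that defines the Dantzig selector can be solved as a linear program via the standard split β = u − v with u, v ≥ 0. A minimal sketch, assuming a fixed design matrix and a hand-picked regularization level `lam` (the function name, data, and choice of `lam` are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve  min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam
    via the LP reformulation b = u - v with u, v >= 0."""
    n, p = X.shape
    A = X.T @ X
    Xty = X.T @ y
    c = np.ones(2 * p)                    # objective: sum(u) + sum(v) = ||b||_1
    # Two-sided constraint -lam <= X^T y - A (u - v) <= lam, as A_ub x <= b_ub:
    A_ub = np.block([[A, -A], [-A, A]])
    b_ub = np.concatenate([lam + Xty, lam - Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    uv = res.x
    return uv[:p] - uv[p:]

# Illustrative use: recover a 2-sparse vector from a Gaussian design.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10)
beta[0], beta[3] = 3.0, -2.0
y = X @ beta + 0.1 * rng.standard_normal(50)
bhat = dantzig_selector(X, y, lam=2.0)
```

The LP is always feasible here because the ordinary least squares solution drives the correlation term XᵀT(y − Xb) to zero; the ℓ1 objective then pulls the estimate toward a sparse vector satisfying the relaxed constraint.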
Similar references
Rate Minimaxity of the Lasso and Dantzig Selector for the ℓq Loss in ℓr Balls
We consider the estimation of regression coefficients in a high-dimensional linear model. For regression coefficients in ℓr balls, we provide lower bounds for the minimax ℓq risk and minimax quantiles of the ℓq loss for all design matrices. Under an ℓ0 sparsity condition on a target coefficient vector, we sharpen and unify existing oracle inequalities for the Lasso and Dantzig selector. We deri...
High-dimensional stochastic optimization with the generalized Dantzig estimator
We propose a generalized version of the Dantzig selector. We show that it satisfies sparsity oracle inequalities in prediction and estimation. We then consider the particular case of model selection in high-dimensional linear regression with the Huber loss function. In this case we derive the sup-norm convergence rate and the sign concentration property of the Dantzig estimators under a mutual coh...
Thresholded Lasso for High Dimensional Variable Selection
Given n noisy samples with p dimensions, where n ≪ p, we show that the multi-step thresholding procedure based on the Lasso (we call it the Thresholded Lasso) can accurately estimate a sparse vector β ∈ ℝ^p in a linear model Y = Xβ + ε, where X_{n×p} is a design matrix normalized to have column ℓ2-norm √n, and ε ∼ N(0, σ²Iₙ). We show that under the restricted eigenvalue (RE) condition (Bickel–Ritov–T...
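The multi-step idea behind the Thresholded Lasso can be sketched in its simplest two-step form: fit the Lasso, drop coefficients below a threshold, and refit ordinary least squares on the surviving support. A minimal sketch, assuming hand-picked values for the Lasso penalty `alpha` and threshold `tau` (both names and values are illustrative, not from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

def thresholded_lasso(X, y, alpha, tau):
    """Two-step sketch: (1) Lasso fit, (2) threshold at tau,
    (3) OLS refit on the retained support to remove shrinkage bias."""
    lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
    support = np.flatnonzero(np.abs(lasso.coef_) > tau)
    beta = np.zeros(X.shape[1])
    if support.size:
        # Least-squares refit restricted to the selected columns.
        beta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta

# Illustrative use: a 2-sparse signal with Gaussian design and noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[1], beta[5] = 2.0, -1.5
y = X @ beta + 0.1 * rng.standard_normal(100)
bhat = thresholded_lasso(X, y, alpha=0.1, tau=0.5)
```

The refit step matters: the Lasso shrinks every retained coefficient by roughly the penalty level, and the OLS refit on the selected support removes that bias.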
Transductive versions of the LASSO and the Dantzig Selector
Transductive methods are useful in prediction problems when the training dataset is composed of a large number of unlabeled observations and a smaller number of labeled observations. In this paper, we propose an approach for developing transductive prediction procedures that are able to take advantage of sparsity in high-dimensional linear regression. More precisely, we define transduct...
The Group Dantzig Selector
We introduce a new method, the group Dantzig selector, for high-dimensional sparse regression with group structure, together with a convincing theory of why utilizing the group structure can be beneficial. Under a group restricted isometry condition, we obtain a significantly improved nonasymptotic ℓ2-norm bound over the basis pursuit or the Dantzig selector, which ignore the group structure. ...